Light field (LF) images with multi-view properties have many applications, which can be severely affected by low-light imaging. Recent learning-based methods for low-light enhancement have their own disadvantages, such as a lack of noise suppression under extremely low light, complicated training processes, and poor performance. Targeting these deficiencies while fully utilizing the multi-view information, we propose an efficient Low-light Restoration Transformer (LRT) for LF images, with multiple heads performing the specific intermediate tasks of denoising, luminance adjustment, refinement, and detail enhancement within a single network, achieving progressive restoration from small scale to full scale. We design an angular transformer block with a view-wise scheme to efficiently model global angular relations, and a window-based multi-scale transformer block to encode multi-scale local and global spatial information. To solve the problem of insufficient training data, we formulate a synthesis pipeline by simulating the major noise sources with the estimated noise parameters of an LF camera. Experimental results show that our method achieves superior performance in restoring extremely low-light and noisy LF images with high efficiency.
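As a rough illustration of what "modeling global angular relations" means, the sketch below runs a single-head self-attention step across the view axis of a light field, so every sub-aperture view aggregates features from all others. This is only a toy stand-in, not the paper's angular transformer block; the 3x3 view grid, the feature dimension, and the single-head formulation are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def angular_self_attention(views):
    """Toy single-head self-attention across the angular (view) axis:
    each view's output is a similarity-weighted mix of all views,
    i.e. a global angular interaction."""
    V, D = views.shape
    scores = views @ views.T / np.sqrt(D)   # (V, V) view-to-view affinities
    return softmax(scores, axis=-1) @ views

feats = np.random.default_rng(0).normal(size=(9, 16))  # 3x3 sub-aperture views, 16-dim each
out = angular_self_attention(feats)
```

Because each output row is a convex combination of the view features, the operation mixes information globally across views without leaving the feature range.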
In video denoising, adjacent frames often provide very useful information, but accurate alignment is required before such information can be exploited. In this work, we propose a multi-alignment network that generates multiple flow proposals followed by attention-based averaging. It serves to mimic the non-local mechanism, suppressing noise by averaging multiple observations. Our approach can be applied to various state-of-the-art models based on flow estimation. Experiments on a large-scale video dataset demonstrate that our method improves the denoising baseline model by 0.2 dB, and further reduces its parameters by 47% via model distillation. The code is available at https://github.com/indigopurple/manet.
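The "attention-based averaging" of aligned proposals can be sketched as follows: given several candidate alignments of a neighboring frame, each is weighted by its similarity to the reference and the weighted mean suppresses independent noise. This is a toy stand-in for the network's learned attention; the Gaussian similarity weighting, the temperature `tau`, and the synthetic frames are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(size=(8, 8))
candidates = clean + rng.normal(0.0, 0.1, size=(4, 8, 8))  # 4 aligned, noisy proposals
reference = candidates[0]

def attention_average(reference, candidates, tau=1.0):
    """Weight each aligned proposal by its similarity to the reference,
    then average: independent noise cancels, as in a non-local mean."""
    diffs = ((candidates - reference) ** 2).mean(axis=(1, 2))  # (K,) dissimilarities
    w = np.exp(-diffs / tau)
    w /= w.sum()                                               # attention weights
    return np.tensordot(w, candidates, axes=1)

denoised = attention_average(reference, candidates)
```

With independent noise across the four proposals, the weighted average has markedly lower error than any single observation.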
Imaging in photon-scarce situations poses challenges for many applications, since the captured images have a low signal-to-noise ratio and poor luminance. In this paper, we study raw image restoration under low photon-count conditions by simulating the imaging of a quantum image sensor (QIS). We develop a lightweight framework consisting of a multi-level pyramid denoising network (MPDNet) and a luminance adjustment (LA) module to perform denoising and luminance enhancement separately. The main component of our framework is the multi-skip attention residual block (MARB), which integrates multi-scale feature fusion and attention mechanisms for better feature representation. Our MPDNet adopts the idea of the Laplacian pyramid to learn the small-scale noise map and large-scale high-frequency details at different levels, performing feature extraction on multi-scale input images to encode richer contextual information. Our LA module enhances the luminance of the denoised image by estimating its illumination, which better avoids color distortion. Extensive experimental results demonstrate that our image restorer achieves superior performance on degraded images with various photon levels by suppressing noise and effectively recovering luminance and color.
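The Laplacian-pyramid idea, learning small-scale noise and large-scale details at separate levels, rests on a decomposition like the one sketched below, where each level stores the high-frequency residual left after down/upsampling. This uses crude nearest-neighbor resampling rather than the Gaussian filtering of a classical Laplacian pyramid, and says nothing about MPDNet's actual layers; it only shows that the decomposition is lossless.

```python
import numpy as np

def downsample(img):
    return img[::2, ::2]

def upsample(img):
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=2):
    """Split an image into per-level high-frequency residuals plus a
    coarse base, so noise and details can be treated at separate scales."""
    pyr, cur = [], img
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small))   # high-frequency residual at this level
        cur = small
    pyr.append(cur)                          # coarsest level
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for hf in reversed(pyr[:-1]):
        cur = upsample(cur) + hf             # exact inverse of the split above
    return cur

img = np.random.default_rng(0).uniform(size=(8, 8))
pyr = laplacian_pyramid(img)
```

Reconstruction from the residuals is exact, which is why a network can safely process each scale independently and then recombine.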
Existing data-driven and feedback traffic control strategies do not consider the heterogeneity of real-time data measurements. Moreover, traditional reinforcement learning (RL) methods typically converge slowly due to a lack of data efficiency. In addition, conventional optimal perimeter control schemes require exact knowledge of the system dynamics and are therefore fragile to endogenous uncertainty. To handle these challenges, this work proposes an integral reinforcement learning (IRL) based approach for learning the macroscopic traffic dynamics for adaptive optimal perimeter control. This work makes the following primary contributions to the transportation literature: (a) A continuous-time control is developed with discretized gain updates to adapt to discrete-time sensor data. (b) To reduce the sampling complexity and use the available data more efficiently, the experience replay (ER) technique is introduced into the IRL algorithm. (c) The proposed method relaxes the requirement for model calibration in a "model-free" manner, which is robust to modeling uncertainty and enhances real-time performance via a data-driven RL algorithm. (d) The convergence of the IRL-based algorithm and the stability of the controlled traffic dynamics are proven via Lyapunov theory. The optimal control law is parameterized and then approximated by a neural network (NN), which moderates the computational complexity. State and input constraints are considered while no model linearization is required. Numerical examples and simulation experiments are presented to verify the effectiveness and efficiency of the proposed method.
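To give a concrete flavor of contribution (b), the sketch below evaluates a fixed control policy on a toy scalar linear system by least squares over a replay buffer of stored transitions, reusing old data instead of fresh rollouts. The system constants, the quadratic value ansatz V(x) = w x^2, and the discrete Bellman residual are all invented for illustration and are far simpler than the paper's IRL formulation.

```python
import numpy as np

# Toy surrogate: scalar linear system x' = a*x + b*u with quadratic stage cost.
a, b, q, r, dt = -0.5, 1.0, 1.0, 0.1, 0.05

buffer = []                                  # experience replay buffer
x = 1.0
for _ in range(200):
    u = -x                                   # fixed behavior policy
    cost = (q * x**2 + r * u**2) * dt        # stage cost
    x_next = x + (a * x + b * u) * dt        # Euler step of the dynamics
    buffer.append((x, cost, x_next))
    x = x_next

# Replayed least-squares fit of w in V(x) = w x^2 from the Bellman identity
# w x^2 ≈ cost + w x_next^2, using every stored transition at once.
X = np.array([s**2 - sn**2 for s, c, sn in buffer])
y = np.array([c for s, c, sn in buffer])
w = float(X @ y / (X @ X))
```

Because the fit reuses the whole buffer, each stored sample contributes to every update, which is the data-efficiency argument behind ER.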
Despite the growing number of datasets collected for training 3D object detection models, annotating 3D boxes on LiDAR scans still requires significant human effort. To automate the annotation and facilitate the production of customized datasets, we propose an end-to-end multimodal transformer (MTrans) autolabeler, which leverages both LiDAR scans and images to generate precise 3D box annotations from weak 2D bounding boxes. To alleviate the pervasive sparsity problem that hinders existing autolabelers, MTrans densifies the sparse point cloud by generating new 3D points based on 2D image information. With its multi-task design, MTrans segments foreground/background, densifies LiDAR point clouds, and regresses 3D boxes simultaneously. Experimental results verify the effectiveness of MTrans in improving the quality of the generated labels. By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on the KITTI moderate and hard samples, respectively, than the state-of-the-art autolabeler. MTrans can also be extended to improve the accuracy of 3D object detection, yielding a remarkable 89.45% AP on KITTI hard samples. The code is available at https://github.com/cliu2/mtrans.
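MTrans predicts new 3D points from image features to densify the cloud; the sketch below illustrates only the idea of raising point density, using a naive geometric rule (midpoints between neighboring points). The midpoint rule and the toy coordinates are assumptions and bear no relation to the learned densification.

```python
import numpy as np

def densify(points):
    """Toy densification: insert a midpoint between consecutive points
    (ordered along x). MTrans instead *predicts* new 3D points from 2D
    image features; this only shows density inside a box increasing."""
    pts = points[np.argsort(points[:, 0])]
    mids = (pts[:-1] + pts[1:]) / 2.0        # one new point per neighbor pair
    return np.vstack([pts, mids])

sparse = np.array([[0.0, 0.0, 0.0], [2.0, 1.0, 0.0], [4.0, 0.0, 1.0]])
dense = densify(sparse)
```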
In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, one peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we sample the data distribution based on frequency of predicates into sub-distributions, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during the training process. The complementary predicate knowledge of these peers is then ensembled utilizing a consensus voting strategy, which simulates a civilized voting process in our society that emphasizes the majority opinion and diminishes the minority opinion. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We have established a new state-of-the-art (SOTA) on the SGCls task by achieving a mean of 31.6.
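A minimal sketch of the two ingredients: frequency-based splitting of predicate classes into head/body/tail sub-distributions, and a majority-vote ensemble over peers' predictions. The cut fractions, the toy labels, and the plain majority rule are illustrative assumptions; the paper's consensus voting strategy may weight peers differently.

```python
from collections import Counter

def split_by_frequency(labels, head_frac=0.2, tail_frac=0.4):
    """Order predicate classes by frequency and cut them into head/body/
    tail groups (the fractions here are arbitrary illustration values)."""
    ordered = [c for c, _ in Counter(labels).most_common()]
    n = len(ordered)
    n_head = max(1, int(n * head_frac))
    n_tail = max(1, int(n * tail_frac))
    return ordered[:n_head], ordered[n_head:n - n_tail], ordered[n - n_tail:]

def consensus_vote(peer_predictions):
    """Majority vote across peers for each sample."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*peer_predictions)]

# Synthetic long-tailed predicate frequencies
labels = ["on"] * 50 + ["has"] * 20 + ["near"] * 10 + ["riding"] * 3 + ["eating"] * 1
head, body, tail = split_by_frequency(labels)
```

Each peer would then be trained on a different combination of these groups, and their outputs merged with `consensus_vote`.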
Audio-visual scene understanding is a challenging problem due to the unstructured spatial-temporal relations in audio signals and the spatial layouts of different objects and varied texture patterns in visual images. Recently, many studies have focused on abstracting features with convolutional neural networks, while the learning of explicit, semantically relevant frames of sound signals and visual images has been overlooked. To this end, we present an end-to-end framework, namely the attentional graph convolutional network (AGCN), for structure-aware audio-visual scene representation. First, the spectrogram of the sound and the input image are processed by a backbone network for feature extraction. Then, to build multi-scale hierarchical information from the input features, we utilize an attention fusion mechanism to aggregate features from multiple layers of the backbone network. Notably, to represent the salient regions and contextual information of the audio-visual inputs, a salient acoustic graph (SAG), a contextual acoustic graph (CAG), a salient visual graph (SVG), and a contextual visual graph (CVG) are constructed for the audio-visual scene representation. Finally, the constructed graphs are passed through a graph convolutional network for structure-aware audio-visual scene recognition. Extensive experiments on audio, visual, and audio-visual scene recognition datasets show that the AGCN achieves promising results. Visualizations of the graphs on spectrograms and images show that the proposed CAG/SAG and CVG/SVG focus on salient and semantically relevant regions.
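The constructed graphs are consumed by a graph convolutional network; a single propagation step in the standard normalized-adjacency form is sketched below as a reference point. The random features, the tiny path graph, and the ReLU nonlinearity are illustrative assumptions, not the AGCN architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W):
    each node mixes its features with its neighbors' before a linear map."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
H = rng.normal(size=(3, 4))   # node features (e.g. salient-region embeddings)
W = rng.normal(size=(4, 2))
out = gcn_layer(A, H, W)
```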
As various city agencies and mobility operators navigate toward innovative mobility solutions, there is a need for strategic flexibility in well-timed investment decisions in the design and timing of mobility service regions, i.e., cast as "real options" (RO). This problem becomes increasingly challenging with multiple interacting RO in such investments. We propose a scalable machine learning based RO framework for the multi-period sequential service region design and timing problem for mobility-on-demand services, framed as a Markov decision process with non-stationary stochastic variables. A value function approximation policy from the literature uses multi-option least squares Monte Carlo simulation to obtain a policy value for a set of interdependent investment decisions as deferral options (CR policy). The goal is to determine the optimal selection and timing of a set of zones to include in a service region. However, prior work required explicit enumeration of all possible sequences of investments. To address the combinatorial complexity of such enumeration, we propose a new "deep" RO policy variant using an efficient recurrent neural network (RNN) based ML method (CR-RNN policy) to sample sequences, foregoing the need for enumeration and making network design and timing policy tractable for large-scale implementation. Experiments on multiple service region scenarios in New York City (NYC) show that the proposed policy substantially reduces the overall computational cost (RO evaluation time is reduced for over 90% of total investment sequences), with zero to near-zero gap compared to the benchmark. A case study of sequential service region design for the expansion of MoD services in Brooklyn, NYC shows that using the CR-RNN policy to determine the optimal RO investment strategy yields similar performance (within 0.5% of the CR policy value) with significantly reduced computation time (about 5.4 times faster).
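The least-squares Monte Carlo machinery underneath the CR policy can be sketched on a single deferral option: simulate demand paths, regress the continuation value on a polynomial basis of the current state, and defer whenever continuation beats immediate investment. The demand process, the strike-like cost `K`, and the quadratic basis are invented for illustration; the paper handles a set of interdependent options within a non-stationary MDP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deferral option priced by least-squares Monte Carlo:
# invest at t1 for payoff (demand - K), or defer and decide at t2.
K, n_paths = 1.0, 5000
d1 = 1.0 * np.exp(rng.normal(0.0, 0.3, n_paths))    # simulated demand at t1
d2 = d1 * np.exp(rng.normal(0.0, 0.3, n_paths))     # simulated demand at t2

exercise_t2 = np.maximum(d2 - K, 0.0)               # payoff if deferred to t2
basis = np.vstack([np.ones(n_paths), d1, d1**2]).T  # polynomial basis of the t1 state
coef, *_ = np.linalg.lstsq(basis, exercise_t2, rcond=None)
continuation = basis @ coef                          # estimated value of deferring
invest_now = np.maximum(d1 - K, 0.0)
value_t1 = np.where(invest_now > continuation, invest_now, exercise_t2)
option_value = value_t1.mean()                       # value of the flexibility to defer
```

The regression step is what avoids enumerating decision sequences; the CR-RNN policy extends this idea by sampling candidate sequences instead of listing them all.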
Springs are efficient in storing and returning elastic potential energy but are unable to hold the energy they store in the absence of an external load. Lockable springs use clutches to hold elastic potential energy in the absence of an external load but have not yet been widely adopted in applications, partly because clutches introduce design complexity, reduce energy efficiency, and typically do not afford high-fidelity control over the energy stored by the spring. Here, we present the design of a novel lockable compression spring that uses a small capstan clutch to passively lock a mechanical spring. The capstan clutch can lock up to 1000 N force at any arbitrary deflection, unlock the spring in less than 10 ms with a control force less than 1 % of the maximal spring force, and provide an 80 % energy storage and return efficiency (comparable to a highly efficient electric motor operated at constant nominal speed). By retaining the form factor of a regular spring while providing high-fidelity locking capability even under large spring forces, the proposed design could facilitate the development of energy-efficient spring-based actuators and robots.
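The headline numbers are consistent with the capstan relation T_hold = T_load * exp(-mu * theta), which is what lets a tiny control force hold a large spring force. The friction coefficient and wrap count below are illustrative guesses, not values from the paper; they merely show that a sub-1% holding force is plausible.

```python
import math

def capstan_hold_force(load, mu, wraps):
    """Capstan equation: force needed at the free end of a cable to hold
    `load` at the loaded end, with friction coefficient `mu` over `wraps`
    full turns around the drum."""
    theta = 2.0 * math.pi * wraps
    return load * math.exp(-mu * theta)

# Illustrative guesses: mu = 0.2 and 4 wraps hold a 1000 N spring force
# with only a few newtons of control force.
hold = capstan_hold_force(1000.0, 0.2, 4)
```

Because the required holding force falls exponentially with the wrapped angle, a modest number of wraps is enough to keep the control force well under 1% of the locked load.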
The mobility of unmanned aerial vehicles (UAVs) enables flexible and customized federated learning (FL) at the network edge. However, the underlying uncertainties in the aerial-terrestrial wireless channel may lead to a biased FL model. In particular, the distribution of the global model and the aggregation of the local updates within the FL learning rounds at the UAVs are governed by the reliability of the wireless channel. This creates an undesirable bias towards the training data of ground devices with better channel conditions, and vice versa. This paper characterizes the global bias problem of aerial FL in large-scale UAV networks. To this end, the paper proposes a channel-aware distribution and aggregation scheme that enforces equal contributions from all devices in the FL training as a means to resolve the global bias problem. We demonstrate the convergence of the proposed method by experimenting with the MNIST dataset and show its superiority over existing methods. The obtained results enable system parameter tuning to relieve the impact of aerial channel deficiency on the FL convergence rate.
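The channel-aware idea can be sketched as inverse-probability weighting of the received local updates, so that each device's expected contribution is independent of its channel reliability. The exact aggregation rule in the paper may differ; the function below, its uniform averaging denominator, and the toy scalar updates are assumptions.

```python
import numpy as np

def channel_aware_aggregate(updates, received, p_success):
    """Inverse-probability weighting: scale each *received* local update
    by 1/p_i so every device's expected contribution to the global model
    is equal, regardless of its channel reliability."""
    total = np.zeros_like(updates[0])
    for u, got, p in zip(updates, received, p_success):
        if got:                     # update survived the wireless channel
            total += u / p
    return total / len(updates)

updates = [np.array([1.0]), np.array([3.0])]
agg = channel_aware_aggregate(updates, received=[True, True], p_success=[1.0, 1.0])
```

With perfect channels this reduces to a plain average; with an unreliable device, its rare successful updates are up-weighted so the expected aggregate is unbiased.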